235 research outputs found

    Gender recognition from facial images: Two or three dimensions?

    © 2016 Optical Society of America. This paper compares encoded features from both two-dimensional (2D) and three-dimensional (3D) face images in order to achieve automatic gender recognition with high accuracy and robustness. The Fisher vector encoding method is employed to produce 2D, 3D, and fused features with enhanced discriminative power. For 3D face analysis, a two-source photometric stereo (PS) method is introduced that enables 3D surface reconstructions with accurate detail and desirable efficiency. Moreover, a 2D + 3D imaging device, with the two-source PS method at its core, has been developed that can simultaneously gather color images for 2D evaluation and PS images for 3D analysis. This system inherits the superior reconstruction accuracy of the standard (three or more light) PS method but simplifies both the reconstruction algorithm and the hardware design by requiring only two light sources. It also offers great potential for facilitating human-computer interaction by being accurate, cheap, efficient, and nonintrusive. Ten types of low-level 2D and 3D features have been evaluated and encoded for Fisher vector gender recognition. Evaluations of the Fisher vector encoding method have been performed on the FERET, Color FERET, LFW, and FRGCv2 databases, yielding 97.7%, 98.0%, 92.5%, and 96.7% accuracy, respectively. In addition, the comparison of 2D and 3D features has been drawn from a self-collected dataset, constructed with the aid of the 2D + 3D imaging device in a series of data capture experiments. Across a variety of experiments and evaluations, the Fisher vector encoding method is shown to outperform most state-of-the-art gender recognition methods. It is also observed that 3D features reconstructed by the two-source PS method further boost Fisher vector gender recognition performance, i.e., by up to a 6% increase on the self-collected database.
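
    As an illustration of the encoding step described above, the sketch below shows a generic improved Fisher vector encoder built on a diagonal-covariance Gaussian mixture model; the descriptor type, number of mixture components, and all variable names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of improved Fisher vector encoding, assuming low-level
# descriptors (e.g. dense local 2D or 3D features) are already extracted.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(descriptors, n_components=64, seed=0):
    """Fit a diagonal-covariance GMM on pooled training descriptors (N x D)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(descriptors)
    return gmm

def fisher_vector(descriptors, gmm):
    """Encode one image's descriptors (T x D) as an improved Fisher vector."""
    T = descriptors.shape[0]
    gamma = gmm.predict_proba(descriptors)            # (T, K) posteriors
    mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    diff = (descriptors[:, None, :] - mu) / sigma     # (T, K, D)
    g_mu = (gamma[:, :, None] * diff).sum(0) / (T * np.sqrt(w)[:, None])
    g_sig = (gamma[:, :, None] * (diff**2 - 1)).sum(0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation
```

    In a setup like this, 2D and 3D descriptors could be encoded separately and their Fisher vectors concatenated to obtain the fused representation mentioned in the abstract, before training a linear classifier.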

    An efficient and practical 3D face scanner using near infrared and visible photometric stereo

    This paper is concerned with the acquisition of model data for automatic 3D face recognition applications. As 3D methods become progressively more popular in face recognition research, the need for fast and accurate data capture has become crucial. This paper is motivated by this need and offers three primary contributions. Firstly, the paper demonstrates that four-source photometric stereo offers a potential means for data capture that is computationally and financially viable and easily deployable in commercial settings. We have shown that both visible light and less intrusive near infrared light are suitable for facial illumination. The second contribution is a detailed set of experimental results that compare the accuracy of the device to ground truth, which was captured using a commercial projected pattern range finder. Importantly, we show that not only is near infrared light a valid alternative to the more commonly exploited visible light, but that it actually gives more accurate reconstructions. Finally, we assess the validity of the Lambertian assumption on skin reflectance data and show that better results may be obtained by incorporating more advanced reflectance functions, such as the Oren–Nayar model.
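
    The core reconstruction step in Lambertian photometric stereo reduces to a per-pixel least-squares solve; the following is a minimal sketch assuming calibrated light directions and grayscale images, not the scanner's actual code, and the Oren–Nayar refinement mentioned above is not included.

```python
# Minimal sketch of Lambertian four-source photometric stereo, assuming known
# (calibrated) light directions; variable names are illustrative only.
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover albedo and unit normals from >=3 images under known lights.

    images: (n_lights, H, W) float array of intensities.
    light_dirs: (n_lights, 3) unit vectors from surface toward each light.
    """
    n, H, W = images.shape
    I = images.reshape(n, -1)                             # (n, H*W)
    # Least-squares solve light_dirs @ g = I per pixel, with g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)                # normalise safely
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```

    The recovered normal field would then be integrated into a height map for 3D face reconstruction; with only two lights, as in the device above, additional constraints replace the redundant third equation.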

    Magnetic field waves at Uranus

    The proposed research efforts funded by the Uranus Data Analysis Program (UDAP) grant to the Bartol Research Institute (BRI) involve the study of magnetic field waves associated with the Uranian bow shock. This is a collaborative venture bringing together investigators at the BRI, Southwest Research Institute (SwRI), and Goddard Space Flight Center (GSFC). In addition, other collaborations have been formed with investigators granted UDAP funds for similar studies and with investigators affiliated with other Voyager experiments. These investigations and the corresponding collaborations are included in the report. The proposed effort as originally conceived included an examination of waves downstream from the shock within the magnetosheath. However, observations of unexpected complexity and diversity within the upstream region have necessitated that we confine our efforts to the observations recorded upstream of the bow shock on the inbound and outbound legs of the encounter by the Voyager 2 spacecraft.

    Towards on-farm pig face recognition using convolutional neural networks

    © 2018 Elsevier B.V. Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise, objective measurements (e.g. weight) are required. Current best practice involves the use of RFID tags, which are time-consuming for the farmer to fit and distressing for the animal. To overcome this, we propose a non-invasive biometric approach based on the face of the animal. We test this in a farm environment on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model, and our own CNN model trained on an artificially augmented data set. Our results show that accurate individual pig recognition is possible, with an accuracy of 96.7% on 1553 images. Class activation mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.
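
    A rough outline of the kind of pipeline described above is sketched below: artificially augmented face crops feeding a pre-trained CNN whose final layer is replaced for the 10 pig identities. The ImageNet weights stand in for the VGG-Face weights, and all hyperparameters are placeholders, so this should be read as a sketch rather than the paper's configuration.

```python
# Illustrative sketch (not the authors' code): augmented fine-tuning of a
# pre-trained backbone for individual pig identification.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

num_pigs = 10                                    # individual identities
model = models.vgg16(weights="IMAGENET1K_V1")    # stand-in for VGG-Face weights
model.classifier[6] = nn.Linear(4096, num_pigs)  # replace the final layer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# Training loop over an ImageFolder-style dataset of per-pig face crops omitted.
```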

    Scale-dependence of magnetic helicity in the solar wind

    We determine the magnetic helicity, along with the magnetic energy, at high latitudes using data from the Ulysses mission. The data set spans the time period from 1993 to 1996. The basic assumption of the analysis is that the solar wind is homogeneous. Because the solar wind speed is high, the Taylor frozen-in-flow hypothesis is well satisfied for the data used in this study, and we follow the approach first pioneered by Matthaeus et al. (1982, Phys. Rev. Lett. 48, 1256), by which one can use Fourier transforms of the magnetic field time series to construct one-dimensional spectra of the magnetic energy and magnetic helicity. The magnetic helicity derives from the skew-symmetric terms of the three-dimensional magnetic correlation tensor, while the symmetric terms of the tensor are used to determine the magnetic energy spectrum. Our results show a sign change of magnetic helicity at wavenumber k ~ 2 AU^{-1} (or frequency ν ~ 2 μHz) at distances below 2.8 AU and at k ~ 30 AU^{-1} (or ν ~ 25 μHz) at larger distances. At small scales the magnetic helicity is positive at northern heliographic latitudes and negative at southern latitudes. The positive magnetic helicity at small scales is argued to be the result of turbulent diffusion reversing the sign relative to what is seen at small scales at the solar surface. Furthermore, the magnetic helicity declines toward solar minimum in 1996. The magnetic helicity flux integrated separately over one hemisphere amounts to about 10^{45} Mx^2/cycle at large scales and to a value about 3 times lower at smaller scales.
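
    Under the stated assumptions (homogeneity and the Taylor hypothesis), the reduced spectra can be obtained directly from the Fourier transform of a single-spacecraft time series. The sketch below follows the Matthaeus et al. (1982) construction in outline only; the coordinate conventions, normalisation, and any Ulysses-specific processing are assumptions for illustration.

```python
# Minimal sketch: reduced magnetic energy and helicity spectra from a
# uniformly sampled magnetic field time series, after Matthaeus et al. (1982).
import numpy as np

def helicity_spectrum(bx, by, bz, dt, v_sw):
    """Return wavenumber, energy spectrum, and helicity spectrum.

    bx is the component along the sampling (radial) direction; by, bz are
    transverse. Taylor's hypothesis maps frequency f to k = 2*pi*f / v_sw.
    """
    n = len(bx)
    f = np.fft.rfftfreq(n, d=dt)[1:]                  # drop the zero frequency
    Bx, By, Bz = (np.fft.rfft(b)[1:] / n for b in (bx, by, bz))
    k = 2 * np.pi * f / v_sw
    energy = (np.abs(Bx)**2 + np.abs(By)**2 + np.abs(Bz)**2) / 2
    helicity = 2 * np.imag(np.conj(By) * Bz) / k      # sign carries the chirality
    return k, energy, helicity
```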

    Vanishing point detection for visual surveillance systems in railway platform environments

    © 2018 Elsevier B.V. Visual surveillance is of paramount importance in public spaces, and especially on train and metro platforms, which are particularly susceptible to many types of crime, from petty theft to terrorist activity. Image resolution of visual surveillance systems is limited by a trade-off between several requirements such as sensor and lens cost, transmission bandwidth, and storage space. When image quality cannot be improved using high-resolution sensors, high-end lenses, or IR illumination, the visual surveillance system may need to increase the resolving power of the images in software to provide accurate outputs such as, in our case, vanishing points (VPs). Despite having numerous applications in camera calibration, 3D reconstruction, and threat detection, a general method for VP detection has remained elusive. Rather than attempting the infeasible task of VP detection in general scenes, this paper presents a novel method that is fine-tuned to work for railway station environments and is shown to outperform the state of the art for that particular case. We propose a three-stage approach to accurately detect the main lines and vanishing points in low-resolution images acquired by visual surveillance systems in indoor and outdoor railway platform environments. First, several frames are used to increase the resolving power through a multi-frame image enhancer. Second, adaptive edge detection is performed and a novel line clustering algorithm is then applied to determine the parameters of the lines that converge at VPs; this is based on statistics of the detected lines and heuristics about the type of scene. Finally, vanishing points are computed via a voting system that optimizes detection by omitting spurious lines. The proposed approach is robust: it is not affected by the ever-changing illumination and weather conditions of the scene, and it is immune to vibrations. Accurate and reliable vanishing point detection provides valuable information that can aid camera calibration, automatic scene understanding, scene segmentation, semantic classification, and augmented reality in platform environments.
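
    As a simplified illustration of the line-detection and voting stages (not the paper's fine-tuned algorithm), the sketch below detects line segments with a probabilistic Hough transform and takes a crude vote over pairwise intersections; the multi-frame enhancement, adaptive edge detection, and scene-specific clustering described above are omitted.

```python
# Illustrative sketch: estimate a dominant vanishing point from line segments.
import itertools
import cv2
import numpy as np

def detect_vanishing_point(gray, min_len=40):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=min_len, maxLineGap=5)
    if segs is None:
        return None
    # Homogeneous line for each segment: l = p1 x p2.
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0]) for x1, y1, x2, y2 in segs[:, 0]]
    candidates = []
    for l1, l2 in itertools.combinations(lines, 2):
        p = np.cross(l1, l2)                      # intersection in homogeneous coords
        if abs(p[2]) > 1e-6:
            candidates.append(p[:2] / p[2])
    # Crude vote: median of pairwise intersections; a robust clustering or
    # RANSAC step would replace this in a real pipeline.
    return np.median(np.array(candidates), axis=0) if candidates else None
```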

    Magnetic field waves at Uranus

    The research efforts funded by the Uranus Data Analysis Program (UDAP) grant to the Bartol Research Institute (BRI) involved the study of magnetic field waves associated with the Uranian bow shock. Upstream wave studies are motivated as a study of the physics of collisionless shocks. Collisionless shocks in plasmas are capable of 'reflecting' a fraction of the incoming thermal particle distribution and directing the resulting energetic particle motion back into the upstream region. Once within the upstream region, the backward-streaming energetic particles convey information about the approaching shock to the supersonic flow. This particle population is responsible for the generation of upstream magnetic and electrostatic fluctuations known as 'upstream waves', for slowing the incoming wind prior to the formation of the shock ramp, and for heating of the upstream plasma. The waves produced at Uranus not only differed in several regards from the observations at other planetary bow shocks, but also gave new information regarding the nature of the reflected particle populations, which were largely unmeasurable by the particle instruments. Four distinct magnetic field wave types were observed upstream of the Uranian bow shock: low-frequency Alfvén or fast magnetosonic waves excited by energetic protons originating at or behind the bow shock; whistler wave bursts driven by gyrating ion distributions within the shock ramp; and two whistler wave types simultaneously observed upstream of the flanks of the shock and argued to arise from resonance with energetic electrons. In addition, observations of energetic particle distributions by the LECP experiment, thermal particle populations observed by the PLS experiment, and electron plasma oscillations recorded by the PWS experiment proved instrumental to this study and are included to some degree in the papers and presentations supported by this grant.

    Towards machine vision for insect welfare monitoring and behavioural insights

    Machine vision has demonstrated its usefulness in the livestock industry in terms of improving welfare in areas such as lameness detection and body condition scoring in dairy cattle. In this article, we present promising results from applying state-of-the-art object detection and classification techniques to insects, specifically the Black Soldier Fly (BSF) and the domestic cricket, with a view to enabling automated processing for insect farming. We also present the low-cost “Insecto” Internet of Things (IoT) device, which provides environmental condition monitoring for temperature, humidity, CO2, air pressure, and volatile organic compound levels, together with high-resolution image capture. We show that we are able to accurately count and measure the size of BSF larvae and also to classify the sex of domestic crickets by detecting the presence of the ovipositor. These early results point to future work on enabling automation in the selection of desirable phenotypes for subsequent generations and on providing early alerts should environmental conditions deviate from desired values.
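
    The paper relies on learned object detectors, but the counting-and-sizing idea can be illustrated with a much simpler classical sketch: threshold a top-down tray image, extract contours, and measure each larva's oriented bounding box. The function name, pixels-per-millimetre calibration, and thresholding choice are illustrative assumptions, not the “Insecto” pipeline.

```python
# Minimal sketch: count larvae and estimate body length from a tray image,
# assuming reasonably uniform lighting and a known px-per-mm calibration.
import cv2
import numpy as np

def count_and_measure_larvae(image_bgr, px_per_mm, min_area_px=50):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lengths_mm = []
    for c in contours:
        if cv2.contourArea(c) < min_area_px:
            continue                                  # ignore specks and noise
        (_, _), (w, h), _ = cv2.minAreaRect(c)        # oriented bounding box
        lengths_mm.append(max(w, h) / px_per_mm)      # longest side ~ body length
    return len(lengths_mm), lengths_mm
```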

    Eye center localization and gaze gesture recognition for human-computer interaction

    © 2016 Optical Society of America. This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to enhance the human-computer interaction (HCI) experience. The modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which undermine most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database, and it outperforms all of them in terms of localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have shown the algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has demonstrated outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
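
    To make the gradient-based part of such an estimator concrete, the sketch below scores every candidate centre by how well displacement vectors align with image gradients (a brute-force means-of-gradients objective). It is a generic illustration only; the selective oriented gradient filter and the isophote features of the paper are not reproduced.

```python
# Minimal sketch of a gradient-alignment eye-centre estimator on a small
# grayscale eye patch; brute force, intended for illustration, not real time.
import numpy as np

def eye_center_by_gradients(eye_gray):
    """Return (row, col) maximising alignment of displacements with gradients."""
    gy, gx = np.gradient(eye_gray.astype(float))
    mag = np.hypot(gx, gy)
    strong = mag > 0.3 * mag.max()                    # keep only strong gradients
    ys, xs = np.nonzero(strong)
    gxs, gys = gx[strong] / mag[strong], gy[strong] / mag[strong]
    H, W = eye_gray.shape
    best, best_score = (H // 2, W // 2), -np.inf
    for cy in range(H):
        for cx in range(W):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            valid = norm > 0
            dot = (dx[valid] * gxs[valid] + dy[valid] * gys[valid]) / norm[valid]
            score = np.mean(np.maximum(dot, 0.0) ** 2)
            if score > best_score:
                best, best_score = (cy, cx), score
    return best
```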

    Towards facial expression recognition for on-farm welfare assessment in pigs

    Animal welfare is not only an ethically important consideration in good animal husbandry but can also have a significant effect on an animal’s productivity. The aim of this paper was to show that a reduction in animal welfare, in the form of increased stress, can be identified in pigs from frontal images of the animals. We trained a convolutional neural network (CNN) using a leave-one-out design and showed that it is able to discriminate between stressed and unstressed pigs with an accuracy of >90% in unseen animals. Grad-CAM was used to identify the facial regions the network relies on, and these were consistent with the regions used in manual assessments such as the Pig Grimace Scale. This innovative work paves the way for further work examining both positive and negative welfare states, with the aim of developing an automated system that can be used in precision livestock farming to improve animal welfare.
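
    Grad-CAM itself is straightforward to reproduce: the sketch below computes a class activation map from the last convolutional block of a generic backbone. The pig-stress network is not public, so a torchvision model and placeholder input are used here as assumptions.

```python
# Compact Grad-CAM sketch (illustrative, not the study's exact code): hooks
# capture the last conv activations and their gradients, and the map is their
# gradient-weighted, ReLU'd combination upsampled to image size.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, conv_layer, image, target_class):
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(image.unsqueeze(0))            # (1, num_classes)
        logits[0, target_class].backward()
    finally:
        h1.remove()
        h2.remove()
    a, g = acts[0], grads[0]                          # (1, C, h, w) each
    weights = g.mean(dim=(2, 3), keepdim=True)        # global-average gradients
    cam = F.relu((weights * a).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).detach().squeeze()   # normalised heat map

# Example with a generic backbone and random input (placeholders):
# model = models.resnet18(weights="IMAGENET1K_V1").eval()
# cam = grad_cam(model, model.layer4, torch.rand(3, 224, 224), target_class=0)
```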